Feature and instance selection through discriminant analysis criteria
Authors
Abstract
Feature selection and instance selection are two data preprocessing methods widely used in data mining and pattern recognition. Their main goal is to reduce the computational cost of many learning tasks. Recently, joint feature and instance selection has been approached by solving global optimization problems using meta-heuristics. This approach is not only computationally expensive, but it also does not exploit the fact that data usually lie on a structured manifold implicitly hidden in the labels. In this paper, we address joint feature and instance selection using scores derived from discriminant analysis theory. We present three approaches for the joint selection. The first scheme is a wrapper technique, while the other two schemes are filtering techniques. In all approaches, the search process uses a genetic algorithm in which the evaluation criterion is mainly given by a discriminant analysis score. This score depends simultaneously on the candidate feature subset and the best corresponding instances; the instances are thus determined jointly with each candidate subset. The performance of the proposed approaches is quantified and studied on image classification with Nearest Neighbor and Support Vector Machine classifiers. Experiments are conducted on five public datasets, and we compare our approaches with several state-of-the-art methods. The experiments performed show the superiority of the proposed approaches over the baselines.
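As a rough illustration of the idea described in the abstract, the sketch below runs a genetic algorithm over candidate feature subsets and evaluates each subset with a Fisher-style discriminant (trace-ratio) score computed on the instances that fit that subset best. The trace-ratio criterion, the instance-pruning rule (keeping the samples closest to their class mean), and all function names are assumptions made for illustration; they are not the paper's exact criteria.

```python
# Illustrative sketch only (assumed criteria, not the paper's exact method):
# a genetic algorithm evaluates candidate feature subsets with a Fisher-style
# discriminant score, and for each subset keeps only the best-fitting instances.
import numpy as np

rng = np.random.default_rng(0)

def fisher_score(X, y):
    """Trace ratio of between-class to within-class scatter on the given data."""
    overall_mean = X.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        sb += len(Xc) * np.sum((mc - overall_mean) ** 2)
        sw += np.sum((Xc - mc) ** 2)
    return sb / (sw + 1e-12)

def best_instances(X, y, keep_ratio=0.8):
    """Keep the instances closest to their class mean in the selected feature space."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        n_keep = max(2, int(keep_ratio * len(idx)))
        keep[idx[np.argsort(d)[:n_keep]]] = True
    return keep

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    inst = best_instances(Xs, y)          # instances follow the candidate subset
    return fisher_score(Xs[inst], y[inst])

def ga_select(X, y, pop_size=30, generations=50, p_mut=0.05):
    n_features = X.shape[1]
    pop = rng.random((pop_size, n_features)) < 0.5      # random binary population
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # truncation selection
        cut = rng.integers(1, n_features, size=pop_size // 2)
        children = np.array([np.concatenate((parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cut)])  # one-point crossover
        children ^= rng.random(children.shape) < p_mut     # bit-flip mutation
        pop = np.vstack((parents, children))
    scores = np.array([fitness(ind, X, y) for ind in pop])
    best = pop[np.argmax(scores)]
    return best, best_instances(X[:, best], y)

# Toy usage: 200 samples, 20 features, 3 classes.
X = rng.normal(size=(200, 20)); y = rng.integers(0, 3, size=200)
feat_mask, inst_mask = ga_select(X, y)
print("selected features:", np.flatnonzero(feat_mask))
print("selected instances:", inst_mask.sum(), "of", len(y))
```

This corresponds to the filtering flavor; the paper's wrapper scheme would instead score each subset by the accuracy of a classifier trained on the retained instances.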
Similar resources
Multiple-instance discriminant analysis
Multiple-instance discriminant analysis (MIDA) is proposed to cope with the feature extraction problem in multiple-instance learning. Similar to MidLABS, MIDA is also derived from linear discriminant analysis (LDA), and both algorithms can be treated as multiple-instance extensions of LDA. Different from MidLABS which learns from the bag level, MIDA is designed from the instance level. MIDA con...
Kernel discriminant analysis based feature selection
For two-class problems we propose two feature selection criteria based on kernel discriminant analysis (KDA). The first one is the objective function of kernel discriminant analysis called the KDA criterion. We show that the KDA criterion is monotonic for the deletion of features, which ensures stable feature selection. The second one is the recognition rate obtained by a KDA classifier, called...
Discriminant Analysis for Unsupervised Feature Selection
Feature selection has been proven to be efficient in preparing high dimensional data for data mining and machine learning. As most data is unlabeled, unsupervised feature selection has attracted more and more attention in recent years. Discriminant analysis has been proven to be a powerful technique to select discriminative features for supervised feature selection. To apply discriminant analys...
IFSB-ReliefF: A New Instance and Feature Selection Algorithm Based on ReliefF
The increasing use of the Internet and phenomena such as sensor networks have led to an unnecessary increase in the volume of information. Though it has many benefits, it causes problems such as storage space requirements and the need for more powerful processors, as well as data refinement to remove unnecessary data. Data reduction methods provide ways to select useful data from a large amount of duplicate, incomp...
Spectral clustering and discriminant analysis for unsupervised feature selection
In this paper, we propose a novel method for unsupervised feature selection, which utilizes spectral clustering and discriminant analysis to learn the cluster labels of data. During the learning of cluster labels, feature selection is performed simultaneously. By imposing row sparsity on the transformation matrix, the proposed method optimizes for selecting the most discriminative features whic...
Journal
Journal title: Soft Computing
Year: 2022
ISSN: 1433-7479, 1432-7643
DOI: https://doi.org/10.1007/s00500-022-07513-x